A fog-aided wireless network architecture is studied in which edge nodes (ENs), such as base stations, are connected to a cloud processor via dedicated fronthaul links, while also being endowed with caches. Cloud processing enables the centralized implementation of cooperative transmission strategies at the ENs, albeit at the cost of an increased latency due to fronthaul transfer. In contrast, the proactive caching of popular content at the ENs allows for the low-latency delivery of the cached files, but with generally limited opportunities for cooperative transmission among the ENs. The interplay between cloud processing and edge caching is addressed from an information-theoretic viewpoint by investigating the fundamental limits of a high-Signal-to-Noise-Ratio (SNR) metric, termed normalized delivery time (NDT), which captures the worst-case coding latency for delivering any requested content to the users. The NDT is defined under the assumptions of either serial or pipelined fronthaul-edge transmission, and is studied as a function of fronthaul and cache capacity constraints. Placement and delivery strategies across both fronthaul and wireless, or edge, segments are proposed with the aim of minimizing the NDT. Information-theoretic lower bounds on the NDT are also derived. Achievability arguments and lower bounds are leveraged to characterize the minimal NDT in a number of important special cases, including systems with no caching capabilities, as well as to prove that the proposed schemes achieve optimality within a constant multiplicative factor of 2 for all values of the problem parameters.